Read Beyond the Lines: Understanding the Implied Textual Meaning via a Skim and Intensive Reading Model
He, Guoxiu, Gao, Zhe, Jiang, Zhuoren, Kang, Yangyang, Sun, Changlong, Liu, Xiaozhong, Lu, Wei
The nonliteral interpretation of a text is hard for machine models to understand due to its high context-sensitivity and heavy usage of figurative language. In this study, inspired by human reading comprehension, we propose a novel, simple, and effective deep neural framework, called the Skim and Intensive Reading Model (SIRM), for figuring out implied textual meaning. The proposed SIRM consists of two main components, namely the skim reading component and the intensive reading component. N-gram features are quickly extracted by the skim reading component, a combination of several convolutional neural networks, as skim (entire) information. The intensive reading component enables a hierarchical investigation of both local (sentence) and global (paragraph) representations, encapsulating the current embedding and the contextual information with a dense connection. More specifically, the contextual information includes the near-neighbor information and the skim information mentioned above. Finally, besides the normal training loss function, we employ an adversarial loss function as a penalty over the skim reading component to eliminate noisy information arising from special figurative words in the training data. To verify the effectiveness, robustness, and efficiency of the proposed architecture, we conduct extensive comparative experiments on several sarcasm benchmarks and an industrial spam dataset with metaphors. Experimental results indicate that (1) the proposed model, which benefits from context modeling and consideration of figurative language, outperforms existing state-of-the-art solutions, with comparable parameter scale and training speed; (2) the SIRM yields superior robustness in terms of parameter size sensitivity; (3) compared with ablation and addition variants of the SIRM, the final framework is sufficiently efficient.
- Asia > China > Hubei Province > Wuhan (0.05)
- Asia > China > Zhejiang Province > Hangzhou (0.04)
- North America > United States > Indiana (0.04)
- (4 more...)
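The two-component architecture described in the abstract above can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the paper's implementation: random filters stand in for learned parameters, all function names (`skim_read`, `intensive_read`, `conv1d_ngram`) are my own labels, and the dense connection is reduced to simple concatenation of the local vector, a neighbor mean, and the global skim vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_ngram(embeddings, kernel_size, num_filters, rng):
    # embeddings: (seq_len, dim); random filters stand in for learned ones
    seq_len, dim = embeddings.shape
    W = rng.standard_normal((num_filters, kernel_size, dim)) * 0.1
    outs = []
    for i in range(seq_len - kernel_size + 1):
        window = embeddings[i:i + kernel_size]                            # (k, dim)
        outs.append(np.maximum(np.einsum('fkd,kd->f', W, window), 0.0))   # ReLU
    return np.max(np.stack(outs), axis=0)   # max-pool over time -> (num_filters,)

def skim_read(embeddings, kernel_sizes=(2, 3, 4), num_filters=8, rng=rng):
    # skim component: concatenate pooled n-gram features from several CNNs
    return np.concatenate(
        [conv1d_ngram(embeddings, k, num_filters, rng) for k in kernel_sizes]
    )

def intensive_read(sentence_embs, skim_vec):
    # intensive component: each local (sentence) vector is concatenated with
    # its neighbors' mean (near-neighbor context) and the global skim vector
    enriched = []
    for i, s in enumerate(sentence_embs):
        ctx = np.mean(sentence_embs[max(0, i - 1):i + 2], axis=0)
        enriched.append(np.concatenate([s, ctx, skim_vec]))
    return np.mean(np.stack(enriched), axis=0)   # paragraph representation

# toy paragraph: 3 sentences of 6 tokens each, 16-dim embeddings
sents = [rng.standard_normal((6, 16)) for _ in range(3)]
skim = skim_read(np.concatenate(sents, axis=0))
para = intensive_read([s.mean(axis=0) for s in sents], skim)
print(skim.shape, para.shape)   # (24,) (56,)
```

In a trained model the paragraph vector would feed a classifier head, and the adversarial penalty mentioned in the abstract would be an extra loss term on the skim component; neither is shown here.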
'I love you' and 'thanks': Researchers reveal what we type most
Researchers have revealed exactly what mobile phone users type most - and say that 'I love you' is the most popular three word sentence. The team at SwiftKey analysed web data along with anonymous data from their hugely popular alternative keyboard, which has been downloaded to more than 100 million devices, and also found phone users are extremely polite. The single most commonly used one word sentence in English is 'thanks' and the most popular two word phrase is 'thank you'. What we REALLY type: The single most commonly used one word sentence in English is 'thanks' and the most popular two word phrase is 'thank you', according to mobile keyboard firm SwiftKey. The firm has created alternative keyboards which are among the most downloaded Android apps, and recently released a version for Apple handsets. They use a machine learning algorithm to predict what a user will type, constantly refining its ability as it learns how users type.
- Information Technology > Artificial Intelligence > Machine Learning (0.93)
- Information Technology > Communications > Mobile (0.58)
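The prediction idea mentioned in the article can be illustrated with a toy bigram model: count which word most often follows each word, then suggest the top candidates. This is a hedged sketch of the general technique only; SwiftKey's actual algorithm is not public, and the class name `BigramPredictor` is invented for illustration.

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Toy next-word predictor built from bigram counts (illustrative only)."""

    def __init__(self):
        # maps a word to a Counter of the words observed immediately after it
        self.following = defaultdict(Counter)

    def learn(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.following[prev][nxt] += 1

    def predict(self, word, k=3):
        # return up to k most frequent followers of `word`
        return [w for w, _ in self.following[word.lower()].most_common(k)]

p = BigramPredictor()
p.learn("thank you for the gift")
p.learn("thank you so much")
print(p.predict("thank"))  # ['you']
```

A real mobile keyboard would combine far longer contexts, per-user adaptation, and smoothing, but the learn/predict loop above captures the "refining as it learns how users type" idea in miniature.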